dendritic tree


Scalable Bayesian inference of dendritic voltage via spatiotemporal recurrent state space models

Ruoxi Sun, Scott Linderman, Ian Kinsella, Liam Paninski

Neural Information Processing Systems

Recent progress in the development of voltage indicators [1-8] has brought us closer to a longstanding goal in cellular neuroscience: imaging the full spatiotemporal voltage on a dendritic tree. These recordings have the potential (pun not intended) to resolve fundamental questions about the computations performed by dendrites -- questions that have remained open for more than a century [9,10].
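As a rough illustration of the inference machinery involved, here is a minimal Kalman filter on a toy chain-shaped "dendrite" with diffusive linear dynamics and noisy per-compartment observations. All dimensions, dynamics, and noise levels are illustrative assumptions, not the model from the paper.

```python
import numpy as np

def kalman_filter(Y, A, C, Q, R, mu0, V0):
    """Filtered posterior means for x_t = A x_{t-1} + w_t, y_t = C x_t + v_t."""
    mu, V = mu0, V0
    means = []
    for y in Y:
        mu, V = A @ mu, A @ V @ A.T + Q          # predict
        S = C @ V @ C.T + R
        K = V @ C.T @ np.linalg.inv(S)           # Kalman gain
        mu = mu + K @ (y - C @ mu)               # update with observation
        V = V - K @ C @ V
        means.append(mu)
    return np.array(means)

# Toy "dendrite": 5 compartments on a chain; voltage leaks and diffuses
# between neighbors, and fluorescence-like observations are direct but noisy.
n = 5
A = 0.95 * np.eye(n)
for i in range(n - 1):
    A[i, i + 1] = A[i + 1, i] = 0.02             # nearest-neighbor coupling
C, Q, R = np.eye(n), 0.01 * np.eye(n), 0.25 * np.eye(n)

rng = np.random.default_rng(0)
T = 200
X = np.zeros((T, n))
for t in range(1, T):
    X[t] = A @ X[t - 1] + rng.multivariate_normal(np.zeros(n), Q)
Y = X + rng.multivariate_normal(np.zeros(n), R, size=T)

est = kalman_filter(Y, A, C, Q, R, np.zeros(n), np.eye(n))
```

On this toy model, the filtered estimates track the latent voltage with lower error than the raw observations; the paper's contribution is making this kind of inference scale to full dendritic trees.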



The Expressive Leaky Memory Neuron: an Efficient and Expressive Phenomenological Neuron Model Can Solve Long-Horizon Tasks

Spieler, Aaron, Rahaman, Nasim, Martius, Georg, Schölkopf, Bernhard, Levina, Anna

arXiv.org Artificial Intelligence

Biological cortical neurons are remarkably sophisticated computational devices, temporally integrating their vast synaptic input over an intricate dendritic tree, subject to complex, nonlinearly interacting internal biological processes. A recent study proposed to characterize this complexity by fitting accurate surrogate models to replicate the input-output relationship of a detailed biophysical cortical pyramidal neuron model and discovered it needed temporal convolutional networks (TCN) with millions of parameters. Requiring this many parameters, however, could be the result of a misalignment between the inductive biases of the TCN and the cortical neuron's computations. In light of this, and with the aim to explore the computational implications of leaky memory units and nonlinear dendritic processing, we introduce the Expressive Leaky Memory (ELM) neuron model, a biologically inspired phenomenological model of a cortical neuron. Remarkably, by exploiting a few such slowly decaying memory-like hidden states and two-layered nonlinear integration of synaptic input, our ELM neuron can accurately match the aforementioned input-output relationship with under ten thousand trainable parameters. To further assess the computational ramifications of our neuron design, we evaluate it on various tasks with demanding temporal structures, including the Long Range Arena (LRA) datasets, as well as a novel neuromorphic dataset based on the Spiking Heidelberg Digits dataset (SHD-Adding). Leveraging a larger number of memory units with sufficiently long timescales, and correspondingly sophisticated synaptic integration, the ELM neuron proves to be competitive on both datasets, reliably outperforming the classic Transformer and Chrono-LSTM architectures on the latter, even solving the Pathfinder-X task with over $70\%$ accuracy (16k context length).
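As a structural sketch of the ingredients the abstract names -- a handful of slowly decaying memory units plus two-layered nonlinear integration of synaptic input -- the following toy model shows one way such a neuron could be wired. Sizes, timescales, and initialization are illustrative assumptions, not the paper's ELM specification.

```python
import numpy as np

class LeakyMemoryNeuron:
    """Toy leaky-memory neuron: a few hidden states that decay on their own
    timescales, updated by a small two-layer nonlinear integration of the
    input and the current memory, with a linear readout."""

    def __init__(self, n_in, n_mem, n_hidden, taus, rng):
        self.decay = np.exp(-1.0 / np.asarray(taus, dtype=float))
        self.W1 = rng.normal(0, 0.1, (n_hidden, n_in + n_mem))
        self.W2 = rng.normal(0, 0.1, (n_mem, n_hidden))
        self.w_out = rng.normal(0, 0.1, n_mem)
        self.m = np.zeros(n_mem)

    def step(self, x):
        # Two-layer nonlinear integration of input and current memory.
        h = np.tanh(self.W1 @ np.concatenate([x, self.m]))
        delta = np.tanh(self.W2 @ h)
        # Leaky update: each memory unit decays on its own timescale.
        self.m = self.decay * self.m + (1 - self.decay) * delta
        return self.w_out @ self.m
```

Because each update is a convex combination of the old memory and a tanh-bounded target, the memory states stay bounded in (-1, 1) regardless of the input.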


The Clusteron: Toward a Simple Abstraction for a Complex Neuron

Neural Information Processing Systems

Are single neocortical neurons as powerful as multi-layered networks? A recent compartmental modeling study has shown that voltage-dependent membrane nonlinearities present in a complex dendritic tree can provide a virtual layer of local nonlinear processing elements between synaptic inputs and the final output at the cell body, analogous to a hidden layer in a multi-layer network. In this paper, an abstract model neuron is introduced, called a clusteron, which incorporates aspects of the dendritic "cluster-sensitivity" phenomenon seen in these detailed biophysical modeling studies. It is shown, using a clusteron, that a Hebb-type learning rule can be used to extract higher-order statistics from a set of training patterns, by manipulating the spatial ordering of synaptic connections onto the dendritic tree. The potential neurobiological relevance of these higher-order statistics for nonlinear pattern discrimination is then studied within a full compartmental model of a neocortical pyramidal cell, using a training set of 1000 high-dimensional sparse random patterns.
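The clusteron's key operation can be paraphrased as: each synapse contributes its input times the summed input in its local dendritic neighborhood, so co-active synapses that sit close together are multiplicatively boosted. A minimal sketch (the neighborhood radius and input layout are illustrative choices, not the paper's parameters):

```python
import numpy as np

def clusteron_response(x, radius=2):
    """Clusteron-style output for synaptic inputs `x`, ordered by
    position along the dendrite: sum over synapses of the synapse's
    input times the total input within `radius` positions of it."""
    n = len(x)
    total = 0.0
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        total += x[i] * np.sum(x[lo:hi])   # local multiplicative interaction
    return total
```

The same set of active synapses produces a larger response when the active synapses are adjacent than when they are scattered, which is the "cluster sensitivity" a Hebb-type rule can exploit by rearranging synapse positions.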


An active dendritic tree can mitigate fan-in limitations in superconducting neurons

Primavera, Bryce A., Shainline, Jeffrey M.

arXiv.org Artificial Intelligence

Superconducting electronic circuits have much to offer with regard to neuromorphic hardware. Superconducting quantum interference devices (SQUIDs) can serve as an active element to perform the thresholding operation of a neuron's soma. However, a SQUID has a response function that is periodic in the applied signal. We show theoretically that if one restricts the total input to a SQUID to maintain a monotonically increasing response, a large fraction of synapses must be active to drive a neuron to threshold. We then demonstrate that an active dendritic tree (also based on SQUIDs) can significantly reduce the fraction of synapses that must be active to drive the neuron to threshold. In this context, the inclusion of a dendritic tree provides the dual benefits of enhancing the computational abilities of each neuron and allowing the neuron to spike with sparse input activity.
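The fan-in argument can be made concrete with a toy calculation. Below, a SQUID-like unit is modeled as a sin^2 response restricted to its monotonically increasing half-period, and we count the minimum number of active synapses needed to cross a firing threshold, with and without an intermediate layer of amplifying dendrites. All numbers (16 synapses, threshold 0.9, dendritic gain 2, a 4x4 tree) are illustrative assumptions, not the paper's circuit parameters.

```python
import numpy as np
from itertools import product

def squid_response(phi):
    """Periodic SQUID-like response, restricted to the monotonically
    increasing half-period [0, 0.5] as the abstract describes."""
    return np.sin(np.pi * np.clip(phi, 0.0, 0.5)) ** 2

N, THRESH = 16, 0.9

def min_active_direct():
    """All N synapses feed the soma directly; inputs are scaled so that
    full activity just spans the monotonic range."""
    for a in range(N + 1):
        if squid_response(a * 0.5 / N) >= THRESH:
            return a
    return None

def min_active_tree(gain=2.0, n_dend=4, syn_per_dend=4):
    """Two-level tree: 4 amplifying dendrites of 4 synapses each; the soma
    pools the dendritic outputs. Brute-force the cheapest activity pattern."""
    best = None
    for counts in product(range(syn_per_dend + 1), repeat=n_dend):
        d = squid_response(gain * np.asarray(counts) * 0.5 / syn_per_dend)
        if squid_response(d.sum() * 0.5 / n_dend) >= THRESH:
            total = sum(counts)
            best = total if best is None else min(best, total)
    return best
```

With these toy numbers, the direct configuration needs 13 of 16 synapses active to reach threshold, while the dendritic tree needs only 7, illustrating how active dendrites permit spiking with sparser input activity.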


Change with distance from the soma

Science

Neuroscience
Hippocampal neurons receive and integrate synaptic input along their dendritic tree. Inputs located near the cell soma or in the distal dendrite contribute differently to neuronal integration by a variety of mechanisms, including NMDA receptor (NMDAR)–dependent plasticity processes. Using superresolution microscopy, single-nanoparticle imaging, and glutamate uncaging, Ferreira et al. investigated the nanoscale organization of NMDARs containing the subunits GluN2A and GluN2B along the dendritic tree. The organization and surface dynamics of GluN2B-NMDARs, but not of GluN2A-NMDARs, changed between proximal and distal clusters, with a gradual increase in receptor local density from proximal to distal dendritic segments. At the proximal dendrite, the nanoscale organization and membrane dynamics of GluN2B-NMDARs were influenced by physical interplay with the protein kinase CaMKII. Proc. Natl. Acad. Sci. U.S.A. 117, 24526 (2020).


Active dendrites: adaptation to spike-based communication

Ujfalussy, Balazs B., Lengyel, Máté

Neural Information Processing Systems

Computational analyses of dendritic computations often assume stationary inputs to neurons, ignoring the pulsatile nature of spike-based communication between neurons and the moment-to-moment fluctuations caused by such spiking inputs. Conversely, circuit computations with spiking neurons are usually formalized without regard to the rich nonlinear nature of dendritic processing. Here we address the computational challenge faced by neurons that compute and represent analogue quantities but communicate with digital spikes, and show that reliable computation of even purely linear functions of inputs can require the interplay of strongly nonlinear subunits within the postsynaptic dendritic tree. Our theory predicts a matching of dendritic nonlinearities and synaptic weight distributions to the joint statistics of presynaptic inputs. This approach suggests normative roles for some puzzling forms of nonlinear dendritic dynamics and plasticity.
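The setting the abstract describes -- analogue quantities communicated by spikes and processed by nonlinear dendritic subunits -- can be skeletonized as follows: spike trains are low-pass filtered into PSP-like traces, each dendritic subunit applies a nonlinearity to a weighted sum of traces, and the soma sums the subunit outputs. This is only a structural sketch under assumed filters and nonlinearities, not the paper's normative model.

```python
import numpy as np

def psp_trace(spikes, tau=20.0):
    """Leaky integration of a binary spike train into a PSP-like trace."""
    decay, v = np.exp(-1.0 / tau), 0.0
    out = np.empty(len(spikes))
    for t, s in enumerate(spikes):
        v = v * decay + s
        out[t] = v
    return out

def soma_output(spike_trains, W, g=np.tanh):
    """Each row of W defines one dendritic subunit; the subunit outputs
    pass through the nonlinearity g and are summed at the soma."""
    traces = np.array([psp_trace(s) for s in spike_trains])  # (n_syn, T)
    return g(W @ traces).sum(axis=0)                          # (T,)
```

The paper's claim is about this architecture's necessity: with spiking (rather than stationary) inputs, even reproducing a linear function of the underlying presynaptic signals can require the subunit nonlinearities g to be matched to the input statistics.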


Dendritic Compartmentalization Could Underlie Competition and Attentional Biasing of Simultaneous Visual Stimuli

Archie, Kevin A., Mel, Bartlett W.

Neural Information Processing Systems

Neurons in area V4 have relatively large receptive fields (RFs), so multiple visual features are simultaneously "seen" by these cells. Recordings from single V4 neurons suggest that simultaneously presented stimuli compete to set the output firing rate, and that attention acts to isolate individual features by biasing the competition in favor of the attended object. We propose that both stimulus competition and attentional biasing arise from the spatial segregation of afferent synapses onto different regions of the excitable dendritic tree of V4 neurons. The pattern of feedforward, stimulus-driven inputs follows from a Hebbian rule: excitatory afferents with similar RFs tend to group together on the dendritic tree, avoiding randomly located inhibitory inputs with similar RFs. The same principle guides the formation of inputs that mediate attentional modulation.


Complex-Cell Responses Derived from Center-Surround Inputs: The Surprising Power of Intradendritic Computation

Mel, Bartlett W., Ruderman, Daniel L., Archie, Kevin A.

Neural Information Processing Systems

Biophysical modeling studies have previously shown that cortical pyramidal cells driven by strong NMDA-type synaptic currents and/or containing dendritic voltage-dependent Ca or Na channels respond more strongly when synapses are activated in several spatially clustered groups of optimal size, in comparison to the same number of synapses activated diffusely about the dendritic arbor [8]. The nonlinear intradendritic interactions giving rise to this "cluster sensitivity" property are akin to a layer of virtual nonlinear "hidden units" in the dendrites, with implications for the cellular basis of learning and memory [7, 6], and for certain classes of nonlinear sensory processing [8]. In the present study, we show that a single neuron, with access only to excitatory inputs from unoriented ON- and OFF-center cells in the LGN, exhibits the principal nonlinear response properties of a "complex" cell in primary visual cortex, namely orientation tuning coupled with translation invariance and contrast insensitivity. We conjecture that this type of intradendritic processing could explain how complex cell responses can persist in the absence of oriented simple cell input [13].